divide-and-conquer method
A Divide-and-Conquer Method for Sparse Inverse Covariance Estimation
Cho-jui Hsieh, Arindam Banerjee, Inderjit S. Dhillon, Pradeep K. Ravikumar
In this paper, we consider the $\ell_1$ regularized sparse inverse covariance matrix estimation problem with a very large number of variables. Even in the face of this high dimensionality, and with a limited number of samples, recent work has shown this estimator to have strong statistical guarantees in recovering the true structure of the sparse inverse covariance matrix, or alternatively the underlying graph structure of the corresponding Gaussian Markov Random Field. Our proposed algorithm divides the problem into smaller sub-problems and uses the solutions of the sub-problems to build a good approximation for the original problem. We derive a bound on the distance of the approximate solution to the true solution. Based on this bound, we propose a clustering algorithm that attempts to minimize this bound and, in practice, is able to find effective partitions of the variables.
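The divide-and-conquer idea in the abstract can be sketched in code: partition the variables into clusters, solve a smaller estimation sub-problem on each cluster, and assemble the sub-solutions into a block-diagonal approximation of the inverse covariance matrix. The sketch below is a simplified illustration, not the paper's algorithm: it clusters variables by thresholding the sample covariance (connected components of the graph with edges where |S_ij| > lambda), and the `solve_block` sub-solver is left pluggable; the ridge-style inverse used in the usage example is a placeholder where a real graphical-lasso solver would go.

```python
import numpy as np

def cluster_by_threshold(S, lam):
    """Partition variables into connected components of the graph that has
    an edge (i, j) whenever the off-diagonal entry satisfies |S_ij| > lam."""
    p = S.shape[0]
    adj = np.abs(S) > lam
    np.fill_diagonal(adj, False)
    labels = -np.ones(p, dtype=int)  # -1 means "not yet visited"
    comp = 0
    for start in range(p):
        if labels[start] >= 0:
            continue
        # Depth-first search to collect one connected component.
        stack = [start]
        labels[start] = comp
        while stack:
            i = stack.pop()
            for j in np.nonzero(adj[i])[0]:
                if labels[j] < 0:
                    labels[j] = comp
                    stack.append(j)
        comp += 1
    return [np.nonzero(labels == c)[0] for c in range(comp)]

def block_diag_estimate(S, lam, solve_block):
    """Solve each sub-problem independently and place the results on the
    diagonal blocks of the approximate inverse covariance estimate."""
    p = S.shape[0]
    Theta = np.zeros((p, p))
    for idx in cluster_by_threshold(S, lam):
        sub = S[np.ix_(idx, idx)]
        Theta[np.ix_(idx, idx)] = solve_block(sub, lam)
    return Theta

# Usage: a covariance with two independent variable groups splits into two
# sub-problems; entries between groups are exactly zero in the estimate.
S = np.array([[2.0, 0.8, 0.0, 0.0],
              [0.8, 1.5, 0.0, 0.0],
              [0.0, 0.0, 1.2, 0.6],
              [0.0, 0.0, 0.6, 1.0]])
# Placeholder sub-solver (ridge-regularized inverse), not the paper's solver.
Theta = block_diag_estimate(S, 0.1,
                            lambda sub, l: np.linalg.inv(sub + l * np.eye(len(sub))))
```

The key design point is that the cross-cluster entries of the estimate are fixed to zero, so the quality of the approximation depends entirely on how well the clustering isolates weakly coupled variables; this is the quantity the paper's bound controls and its clustering step tries to minimize.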
Sorry humans, Microsoft's AI is the first to reach a perfect Ms. Pac-Man score
At long last, the perfect score for arcade classic Ms. Pac-Man has been achieved, though not by a human. Maluuba -- a deep learning team acquired by Microsoft in January -- has created an AI system that learned to reach the game's maximum point value of 999,990 on the Atari 2600, using a combination of reinforcement learning and a divide-and-conquer method. AI researchers have a documented penchant for using video games to test machine learning; such games mimic real-world chaos in a controlled environment better than more static games like chess do. In 2015, Google's DeepMind AI was able to learn how to master 49 Atari games using reinforcement learning, which provides positive or negative feedback each time the AI attempts to solve a problem. Though AI has conquered a wealth of retro games, Ms. Pac-Man remained elusive for years due to the game's intentional lack of predictability.